10 research outputs found

    Facial Emotion Expressions in Human-Robot Interaction: A Survey

    Get PDF
    Facial expressions are an ideal means of communicating one's emotions or intentions to others. This overview focuses on human facial expression recognition as well as robotic facial expression generation. For human facial expression recognition, both recognition on predefined datasets and recognition in real time are covered. For robotic facial expression generation, both hand-coded and automated methods are covered, i.e., methods in which the facial features (eyes, mouth) of a robot are moved either by hand-coded rules or automatically using machine learning techniques. There are already plenty of studies that achieve high accuracy for emotion expression recognition on predefined datasets, but accuracy for facial expression recognition in real time is comparatively lower. For expression generation in robots, while most robots are capable of making basic facial expressions, few studies enable them to do so automatically. This overview discusses state-of-the-art research on facial emotion expressions during human-robot interaction and identifies several possible directions for future research. (Pre-print version; accepted in the International Journal of Social Robotics.)

    ExGenNet: Learning to Generate Robotic Facial Expression Using Facial Expression Recognition

    Get PDF
    The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better to different robot types and to a wider range of expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets connect a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expressions through the classification network and the generator network. To improve transfer between human training images and images of different robots, we propose to use extracted features in the classifier as well as in the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. The ability of ExGenNet to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects could accurately recognize most of the generated expressions on both robots.
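    The core idea of the abstract above, optimizing joint configurations by backpropagating a classification loss through a frozen generator and classifier, can be sketched with toy linear stand-ins for the two networks. Everything here (shapes, linear models, learning rate) is an illustrative assumption, not the paper's actual architecture:

    ```python
    import numpy as np

    # Toy sketch of ExGenNet-style joint-configuration optimization.
    # A frozen "generator" maps robot joint angles q to a flattened face
    # image, a frozen "classifier" maps the image to expression logits, and
    # we optimize q by gradient descent on the cross-entropy loss for a
    # target expression. Both networks are linear here purely for brevity.

    rng = np.random.default_rng(0)
    N_JOINTS, N_PIXELS, N_EXPRESSIONS = 5, 12, 3

    A = 0.3 * rng.normal(size=(N_PIXELS, N_JOINTS))       # generator: img = A @ q
    W = 0.3 * rng.normal(size=(N_EXPRESSIONS, N_PIXELS))  # classifier: logits = W @ img

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def loss_and_grad(q, target):
        """Cross-entropy of the predicted expression and its gradient w.r.t. q."""
        p = softmax(W @ (A @ q))
        onehot = np.eye(N_EXPRESSIONS)[target]
        grad = A.T @ W.T @ (p - onehot)   # backprop through classifier, then generator
        return -np.log(p[target]), grad

    q = rng.normal(size=N_JOINTS)         # initial joint configuration
    target = 1                            # index of the intended expression
    losses = []
    for _ in range(200):
        loss, g = loss_and_grad(q, target)
        losses.append(loss)
        q -= 0.05 * g                     # gradient step on the joint angles

    print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
    ```

    The networks stay fixed; only the joint vector is updated, which is why the same trained classifier can, in principle, drive expression generation on different robot faces.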

    Implications from Responsible Human-Robot Interaction with Anthropomorphic Service Robots for Design Science

    Get PDF
    Accelerated by the COVID-19 pandemic, anthropomorphic service robots are continuously penetrating various domains of our daily lives. With this development, the need for an interdisciplinary approach to responsibly design human-robot interaction (HRI), with particular attention to human dignity, privacy, compliance, and transparency, increases. This paper contributes to design science by developing a new artifact, i.e., an interdisciplinary framework for designing responsible HRI with anthropomorphic service robots, which covers the three design science research cycles. Furthermore, we propose a multi-method approach for applying this interdisciplinary framework. Our findings thereby offer implications for designing HRI in a responsible manner.

    Responsible Human-Robot Interaction with Anthropomorphic Service Robots: State of the Art of an Interdisciplinary Research Challenge

    Get PDF
    Anthropomorphic service robots are on the rise. The more capable they become and the more regularly they are applied in real-world settings, the more critical the responsible design of human-robot interaction (HRI) becomes, with special attention to human dignity, transparency, privacy, and robot compliance. In this paper, we review the interdisciplinary state of the art relevant to the responsible design of HRI. Furthermore, we suggest directions for future research on the responsible design of HRI with anthropomorphic service robots.

    Top-down approach to compare the moral theories of deontology and utilitarianism in Pac-Man game setting

    No full text
    The processes underlying important decisions in many areas of our everyday lives are becoming increasingly automated. In the near future, many decisions will be made by autonomous artificial agents, and it will be necessary to ensure that these agents do not cause harm to society. Artificial agents therefore need a way of acknowledging the moral dimension of their actions. In this study, we use a top-down approach to implement and compare two common moral theories, deontology and utilitarianism, in the same setting. While deontology focuses on the intention behind an action and the nature of the act itself, utilitarianism holds that an action should be judged solely by its consequences and should maximize overall good. These differences must be captured differently when implementing an artificial moral agent. Inspired by the famous Pac-Man game, we computationally model two moral Pac-Man agents based on top-down rules: a deontological one and a utilitarian one. We also model an amoral Pac-Man agent that does not take any ethical theory into account when guiding its actions. According to the theory of dyadic morality, every moral or immoral act involves an agent and a patient; in our Pac-Man world, an agent helps or harms a patient in every moral or immoral act. The amoral Pac-Man agent does not consider whether its action would help or harm the patient. The deontological Pac-Man agent constrains its behavior according to a set of prohibited actions and duties. The utilitarian Pac-Man agent, in contrast, evaluates the happiness and pain produced by the actions at hand so as to maximize happiness while avoiding unnecessary evils where possible. After implementing the agents, we compare their behavior in the Pac-Man world. While the deontological Pac-Man agent may sometimes face a conflict between succeeding and sticking to its value of always doing the right thing, the utilitarian Pac-Man agent always manages to succeed. We discuss the conflicts that arise for each moral agent between its values and the goals of the game in different scenarios.
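    The three agents described above differ only in their action-selection rule, which can be sketched directly. The action model and numeric payoffs below are invented for illustration; the study's own Pac-Man dynamics are richer:

    ```python
    from dataclasses import dataclass

    # Toy sketch of the three Pac-Man agents' decision rules: an amoral
    # agent, a duty-based (deontological) agent, and a consequence-based
    # (utilitarian) agent choosing among the same candidate actions.

    @dataclass
    class Action:
        name: str
        game_points: int      # points Pac-Man scores for itself
        patient_welfare: int  # effect on the patient (+ helps, - harms)

    def amoral_choice(actions):
        # Ignores the patient entirely: maximize own game points.
        return max(actions, key=lambda a: a.game_points)

    def deontological_choice(actions):
        # Duty-based: actions that harm the patient are prohibited outright,
        # no matter how many points they would score.
        permitted = [a for a in actions if a.patient_welfare >= 0]
        return max(permitted, key=lambda a: a.game_points)

    def utilitarian_choice(actions):
        # Consequence-based: maximize overall good (own points + patient welfare).
        return max(actions, key=lambda a: a.game_points + a.patient_welfare)

    actions = [
        Action("eat-pellet-near-patient", game_points=10, patient_welfare=-8),
        Action("guard-patient",           game_points=2,  patient_welfare=6),
        Action("eat-distant-pellet",      game_points=5,  patient_welfare=0),
    ]

    print(amoral_choice(actions).name)         # -> eat-pellet-near-patient
    print(deontological_choice(actions).name)  # -> eat-distant-pellet
    print(utilitarian_choice(actions).name)    # -> guard-patient
    ```

    The same candidate set yields three different choices, which mirrors the conflict the study examines: the deontological agent forgoes the highest-scoring action because it is prohibited, while the utilitarian agent trades its own points for overall good.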
